meaningful human control


A Framework for Human-Reason-Aligned Trajectory Evaluation in Automated Vehicles

Suryana, Lucas Elbert, Rahmani, Saeed, Calvert, Simeon Craig, Zgonnikov, Arkady, van Arem, Bart

arXiv.org Artificial Intelligence

One major challenge for the adoption and acceptance of automated vehicles (AVs) is ensuring that they can make sound decisions in everyday situations that involve ethical tension. Much attention has focused on rare, high-stakes dilemmas such as trolley problems. Yet similar tensions arise in routine driving when human considerations, such as legality, efficiency, and comfort, come into conflict. Current AV planning systems typically rely on rigid rules, which struggle to balance these competing considerations and often lead to behaviour that misaligns with human expectations. This paper introduces a reasons-based trajectory evaluation framework that operationalises the tracking condition of Meaningful Human Control (MHC). The framework represents human agents' reasons (e.g., regulatory compliance) as quantifiable functions and evaluates how well candidate trajectories align with them. It assigns adjustable weights to agent priorities and includes a balance function to discourage excluding any agent. To demonstrate the approach, we use a real-world-inspired overtaking scenario, which highlights tensions between compliance, efficiency, and comfort. Our results show that different trajectories emerge as preferable depending on how agents' reasons are weighted, and that small shifts in priorities can lead to discrete changes in the selected action. This demonstrates that everyday ethical decisions in AV driving are highly sensitive to the weights assigned to the reasons of different human agents.
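The weighting scheme in this abstract can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: the specific reason functions, the form of the balance term (here, the minimum per-agent score, which penalises any trajectory that leaves one agent's reasons entirely unserved), and all names are assumptions for the sake of the example.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# A "reason" maps a candidate trajectory to an alignment score in [0, 1].
Reason = Callable[[dict], float]

@dataclass
class Agent:
    name: str
    reasons: Dict[str, Reason]   # e.g. {"legality": ..., "comfort": ...}
    weights: Dict[str, float]    # adjustable priorities, same keys as reasons

def agent_score(agent: Agent, traj: dict) -> float:
    """Weighted alignment of one agent's reasons with a trajectory."""
    total_w = sum(agent.weights.values())
    return sum(agent.weights[r] * agent.reasons[r](traj)
               for r in agent.reasons) / total_w

def evaluate(trajs: List[dict], agents: List[Agent]) -> dict:
    """Pick the trajectory maximising mean agent alignment, multiplied
    by a balance term (the minimum agent score) so that no agent's
    reasons can be traded away entirely."""
    def score(traj: dict) -> float:
        scores = [agent_score(a, traj) for a in agents]
        return (sum(scores) / len(scores)) * min(scores)
    return max(trajs, key=score)

# Toy overtaking scenario: staying behind is fully legal but slow;
# overtaking is fast but of doubtful legality.
trajs = [{"name": "stay",     "legal": 1.0, "fast": 0.4},
         {"name": "overtake", "legal": 0.3, "fast": 0.9}]
driver    = Agent("driver",    {"eff": lambda t: t["fast"]},  {"eff": 1.0})
regulator = Agent("regulator", {"law": lambda t: t["legal"]}, {"law": 1.0})
best = evaluate(trajs, [driver, regulator])  # balance term favours "stay"
```

With these (made-up) numbers the balance term dominates: "overtake" has the higher mean score for the driver alone, but its low regulator score drags it down, so "stay" is selected. Nudging the weights or reason values flips the choice discretely, mirroring the sensitivity the abstract reports.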


Reflective Hybrid Intelligence for Meaningful Human Control in Decision-Support Systems

Jonker, Catholijn M., Siebert, Luciano Cavalcante, Murukannaiah, Pradeep K.

arXiv.org Artificial Intelligence

With the growing capabilities and pervasiveness of AI systems, societies must collectively choose between, on the one hand, reduced human autonomy, endangered democracies, and limited human rights, and, on the other, AI that is aligned with human and social values, nurturing collaboration, resilience, knowledge, and ethical behaviour. In this chapter, we introduce the notion of self-reflective AI systems for meaningful human control over AI systems. Focusing on decision support systems, we propose a framework that integrates knowledge from psychology and philosophy with formal reasoning methods and machine learning approaches to create AI systems responsive to human values and social norms. We also propose a possible research approach to design and develop self-reflective capability in AI systems. Finally, we argue that self-reflective AI systems can lead to self-reflective hybrid systems (human + AI), thus increasing meaningful human control and empowering human moral reasoning by providing comprehensible information and insights on possible human moral blind spots.


Art and the science of generative AI: A deeper dive

Epstein, Ziv, Hertzmann, Aaron, Herman, Laura, Mahari, Robert, Frank, Morgan R., Groh, Matthew, Schroeder, Hope, Smith, Amy, Akten, Memo, Fjeld, Jessica, Farid, Hany, Leach, Neil, Pentland, Alex, Russakovsky, Olga

arXiv.org Artificial Intelligence

A new class of tools, colloquially called generative AI, can produce high-quality artistic media for visual arts, concept art, music, fiction, literature, video, and animation. The generative capabilities of these tools are likely to fundamentally alter the creative processes by which creators formulate ideas and put them into production. As creativity is reimagined, so too may be many sectors of society. Understanding the impact of generative AI - and making policy decisions around it - requires new interdisciplinary scientific inquiry into culture, economics, law, algorithms, and the interaction of technology and creativity. We argue that generative AI is not the harbinger of art's demise, but rather is a new medium with its own distinct affordances. In this vein, we consider the impacts of this new medium on creators across four themes: aesthetics and culture, legal questions of ownership and credit, the future of creative work, and impacts on the contemporary media ecosystem. Across these themes, we highlight key research questions and directions to inform policy and beneficial uses of the technology.


Designing for Meaningful Human Control in Military Human-Machine Teams

van Diggelen, Jurriaan, Bosch, Karel van den, Neerincx, Mark, Steen, Marc

arXiv.org Artificial Intelligence

This chapter proposes methods for the analysis, design, and evaluation of Meaningful Human Control (MHC) for defense technologies from the perspective of military human-machine teaming (HMT). Our approach is based on three principles. Firstly, MHC should be regarded as a core objective that guides all phases of analysis, design, and evaluation. Secondly, MHC affects all parts of the sociotechnical system, including humans, machines, AIs, interactions, and context. Lastly, MHC should be viewed as a property that spans longer periods of time, encompassing both prior and real-time control by multiple actors. To describe macro-level design options for achieving MHC, we propose various Team Design Patterns. Furthermore, we present a case study in which we applied some of these methods to envision HMT involving robots and soldiers in a search and rescue task in a military context.


Experts Believe the World is Nearing its End! Killer Robots will Dominate Us

#artificialintelligence

Weapon systems that select and engage targets without meaningful human control are unacceptable and need to be prevented. All countries have a duty to protect humanity from this dangerous development by banning fully autonomous weapons. Retaining meaningful human control over the use of force is an ethical imperative, a legal necessity, and a moral obligation. In the period since Human Rights Watch and other nongovernmental organizations launched the Campaign to Stop Killer Robots in 2013, the question of how to respond to concerns over fully autonomous weapons has steadily climbed the international agenda. The challenge of killer robots, like climate change, is widely regarded as a grave threat to humanity that deserves urgent multilateral action.


The Future of Artificial Intelligence

#artificialintelligence

Ten years ago, Human Rights Watch united with other civil society groups to co-found the Stop Killer Robots campaign in response to emerging military technologies in which machines would replace human control in the use of armed force. There is now widespread recognition that weapons systems that select and attack targets without meaningful human control represent a dangerous development in warfare, with equally disastrous implications for policing. At the United Nations in October, 70 countries, including the United States, acknowledged that autonomy in weapons systems raises "serious concerns from humanitarian, legal, security, technological and ethical perspectives." Delegating life-and-death decisions to machines crosses a moral line, as they would be incapable of appreciating the value of human life and respecting human dignity. Fully autonomous weapons would reduce humans to objects or data points to be processed, sorted and potentially targeted for lethal action.


NGOs and activists call for a ban on the use of autonomous weapons

#artificialintelligence

NGOs and activists have called for a ban on the use of autonomous weapons that are no longer strictly controlled by human hands, calling the so-called "killer robots" a "threat to humanity". The move comes as the Sixth Review Conference of the Convention on Conventional Weapons (CCW) takes place in Geneva this week, chaired by ambassador Yann Hwang of France. Member states are expected to decide whether to negotiate a treaty that prohibits the use of weapons that are not decisively controlled by human hands. Human Rights Watch (HRW) called for a new treaty to clarify and strengthen existing laws related to these new technologies, adding that "the emergence of autonomous weapons systems and the prospect of losing meaningful human control over the use of force are grave threats that demand urgent action". "These are weapons systems that would operate without meaningful human control. That is, instead of a human, you would have the weapon system itself that would select the target and decide when to pull the trigger. You would not have humans performing these functions, instead, artificial intelligence would replace the soldier on the battlefield," explained Steve Goose, director of Human Rights Watch's Arms Division.


Meaningful human control over AI systems: beyond talking the talk

Siebert, Luciano Cavalcante, Lupetti, Maria Luce, Aizenberg, Evgeni, Beckers, Niek, Zgonnikov, Arkady, Veluwenkamp, Herman, Abbink, David, Giaccardi, Elisa, Houben, Geert-Jan, Jonker, Catholijn M., Hoven, Jeroen van den, Forster, Deborah, Lagendijk, Reginald L.

arXiv.org Artificial Intelligence

The concept of meaningful human control has been proposed to address responsibility gaps and mitigate them by establishing conditions that enable a proper attribution of responsibility for humans (e.g., users, designers and developers, manufacturers, legislators). However, the relevant discussions around meaningful human control have so far not resulted in clear requirements for researchers, designers, and engineers. As a result, there is no consensus on how to assess whether a designed AI system is under meaningful human control, making the practical development of AI-based systems that remain under meaningful human control challenging. In this paper, we address the gap between philosophical theory and engineering practice by identifying four actionable properties which AI-based systems must have to be under meaningful human control. First, a system in which humans and AI algorithms interact should have an explicitly defined domain of morally loaded situations within which the system ought to operate. Second, humans and AI agents within the system should have appropriate and mutually compatible representations. Third, responsibility attributed to a human should be commensurate with that human's ability and authority to control the system. Fourth, there should be explicit links between the actions of the AI agents and the actions of humans who are aware of their moral responsibility. We argue these four properties are necessary for AI systems under meaningful human control, and provide possible directions to incorporate them into practice. We illustrate these properties with two use cases: automated vehicles and AI-based hiring. We believe these four properties will support practically minded professionals in taking concrete steps toward designing and engineering AI systems that facilitate meaningful human control and responsibility.


A Feminist Future Begins By Banning Killer Robots

#artificialintelligence

On International Women's Day, weapons development won't be the first thing that springs to mind for achieving global gender equality. But banning autonomous weapons systems, AKA "killer robots", is needed to strengthen global peace, advance human security, and ensure a feminist future. Technology could be a benevolent force in our increasingly integrated society. The potential benefits of innovative advancements in the fields of artificial intelligence, robotics, and machine learning could secure our future. As United Nations Secretary-General Antonio Guterres said: "…these new capacities can help us to lift millions of people out of poverty, achieve the Sustainable Development Goals and enable developing countries to leapfrog into a better future."


Europe Poll Supports Killer Robots Ban

#artificialintelligence

"Banning killer robots is both politically savvy and morally necessary," said Mary Wareham, the Arms Division advocacy director at Human Rights Watch and coordinator of the Campaign to Stop Killer Robots. "European states should take the lead and open ban treaty negotiations if they are serious about protecting the world from this horrific development." Countries attending the annual meeting of states parties to the Convention on Conventional Weapons (CCW) at the United Nations in Geneva will decide on November 15 whether to continue diplomatic talks on killer robots, also known as lethal autonomous weapons systems or fully autonomous weapons. Since 2014, these states have held eight meetings on lethal autonomous weapons systems under the auspices of the Convention on Conventional Weapons (CCW), a major disarmament treaty. Over the course of those meetings, states have built a shared understanding of concern, but they have struggled to reach agreement on credible recommendations for multilateral action due to the objections of a handful of military powers, most notably Russia and the United States.